Reliability analysis models for replication-based storage systems with proactive fault tolerance
LI Jing, LUO Jinfei, LI Bingchao
Journal of Computer Applications    2021, 41 (4): 1113-1121.   DOI: 10.11772/j.issn.1001-9081.2020071067
A proactive fault tolerance mechanism predicts disk failures and prompts the system to migrate and back up the data at risk in advance, thereby enhancing storage system reliability. Since existing research cannot accurately evaluate the reliability of replication-based storage systems with proactive fault tolerance, several state transition models were proposed for such systems. The models were then implemented with Monte Carlo simulation to simulate the operation of replication-based storage systems with proactive fault tolerance, and the expected number of data-loss events over a given period was counted. The Weibull distribution was used to model the time distributions of device failure and repair events, and the impacts of the proactive fault tolerance mechanism, node failures, node failure repairs, disk failures and disk failure repairs on system reliability were evaluated quantitatively. Experimental results showed that when the accuracy of the prediction model reached 50%, system reliability could be improved by 1-3 times, and that 3-way replication systems were more sensitive to system parameters than 2-way replication systems. With the proposed models, system administrators can easily assess system reliability under different fault tolerance schemes and system parameters, and thus build storage systems with high reliability and availability.
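The Monte Carlo idea behind these models can be illustrated in a few lines: sample Weibull-distributed failure and repair times and count runs in which a second replica fails while the first is still under repair. This is a minimal sketch of the simulation principle, not the paper's full state-transition models; the distribution parameters and the single-group loss condition are illustrative assumptions.

```python
import random

def simulate_group(mission_h=87600.0, fail_scale=1.2e5, fail_shape=1.1,
                   repair_scale=12.0, repair_shape=1.5,
                   runs=1000, seed=7):
    """Expected data-loss events per 2-way replication group over the
    mission time, by Monte Carlo.  A loss is counted when the partner
    replica's (freshly sampled) lifetime is shorter than the repair
    window of the failed one.  All parameters are illustrative."""
    rng = random.Random(seed)
    losses = 0
    for _ in range(runs):
        t = 0.0
        while True:
            t += rng.weibullvariate(fail_scale, fail_shape)  # next disk failure
            if t >= mission_h:
                break
            repair = rng.weibullvariate(repair_scale, repair_shape)
            partner = rng.weibullvariate(fail_scale, fail_shape)
            if partner < repair:          # second failure during repair
                losses += 1
    return losses / runs                  # average losses per group
```

Repeating the simulation under different Weibull parameters or replication factors is how the expected number of data-loss events is compared across schemes.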
Commodity recommendation model based on improved deep Q network structure
FU Kui, LIANG Shaoqing, LI Bing
Journal of Computer Applications    2020, 40 (9): 2613-2621.   DOI: 10.11772/j.issn.1001-9081.2019112002
Traditional recommendation methods suffer from problems such as data sparsity and poor feature recognition. To address these problems, positive and negative feedback datasets with time-series properties were constructed from implicit feedback. Since the feedback datasets and commodity purchases both have strong time-series features, a Long Short-Term Memory (LSTM) network was introduced as a component of the model. Considering that a user's own characteristics and the returns of action selection are determined by different input data, the deep Q network based on the competitive (dueling) architecture was improved: by integrating users' positive and negative feedback with the time-series features of commodity purchases, a commodity recommendation model based on the improved deep Q network structure was designed. In the model, the positive and negative feedback data were trained separately, and the time-series features of commodity purchases were extracted. On the Retailrocket dataset, compared with the best results among the Factorization Machine (FM), Wide & Deep learning (W&D) and Collaborative Filtering (CF) models, the proposed model improves precision, recall, Mean Average Precision (MAP) and Normalized Discounted Cumulative Gain (NDCG) by 158.42%, 89.81%, 95.00% and 65.67% respectively. Dueling Bandit Gradient Descent (DBGD) was also used as the exploration method to alleviate the low diversity of recommended commodities.
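The "competitive architecture" the abstract builds on is the dueling DQN, whose head splits into a state-value stream and an advantage stream. A sketch of only the standard aggregation step is given below (the LSTM feature extraction and the paper's specific improvements are assumed to happen upstream):

```python
def dueling_q(value, advantages):
    """Combine the two streams of a dueling DQN head:
    Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    Subtracting the mean advantage keeps the V/A decomposition
    identifiable, since adding a constant to V and subtracting it
    from every A would otherwise leave Q unchanged."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]
```

In a full model the `value` scalar and `advantages` vector would come from two small networks sharing the LSTM-encoded state.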
Real-time processing of space science satellite data based on stream computing
SUN Xiaojuan, SHI Tao, HU Yuxin, TONG Jizhou, LI Bing, SONG Yao
Journal of Computer Applications    2019, 39 (6): 1563-1568.   DOI: 10.11772/j.issn.1001-9081.2018122602
To meet the increasingly demanding real-time processing requirements of space science satellite observation data, a real-time processing method based on a stream computing framework was proposed. Firstly, the data stream was analyzed abstractly according to the data processing characteristics of space science satellites. Then, the input and output data structures of each processing unit were redefined. Finally, a parallel data stream processing structure was designed on the stream computing framework Storm to meet the requirements of parallel processing and distributed computing of large-scale data. A space science satellite data processing system developed with this method was tested and analyzed. The results show that, under the same conditions, the data processing time is half that of the original system, and that the data-localization strategy achieves higher throughput than the round-robin strategy, with tuple throughput increased by 29% on average. The stream computing framework thus greatly shortens data processing delay and improves the real-time performance of space science satellite data processing systems.
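The comparison between the data-localization strategy and round-robin corresponds to Storm's fields grouping versus shuffle grouping. A language-agnostic sketch in Python (Storm topologies are typically written in Java; the tuple key name here is illustrative):

```python
from itertools import cycle

def fields_grouping(tuples, n_tasks, key):
    """Route each tuple to the task that owns its key value, as in
    Storm's fields grouping: tuples with the same key always reach
    the same task, so per-key state stays local."""
    buckets = [[] for _ in range(n_tasks)]
    for t in tuples:
        buckets[hash(t[key]) % n_tasks].append(t)
    return buckets

def round_robin(tuples, n_tasks):
    """Distribute tuples evenly regardless of content, as in a
    shuffle/round-robin grouping."""
    buckets = [[] for _ in range(n_tasks)]
    tasks = cycle(range(n_tasks))
    for t in tuples:
        buckets[next(tasks)].append(t)
    return buckets
```

Keeping same-key tuples on one task avoids cross-node state transfer, which is one plausible source of the throughput gain reported above.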
Fast retrieval method of three-dimensional indoor map data based on octree
LYU Hongwu, FU Junqiang, WANG Huiqiang, LI Bingyang, YUAN Quan, CHEN Shijun, CHEN Dawei
Journal of Computer Applications    2019, 39 (1): 82-86.   DOI: 10.11772/j.issn.1001-9081.2018071646
To address the low efficiency of data retrieval in indoor three-dimensional (3D) maps, an octree-based indoor 3D map data retrieval method was proposed. Firstly, the data was stored according to an octree space partition. Secondly, the data was encoded to facilitate addressing. Thirdly, the candidate data was filtered by adding a room-interval constraint. Finally, the indoor map data was retrieved. Compared with the search method without constraints, the search cost of the proposed method was reduced by 25 percentage points on average, and the search time was more stable. The proposed method can therefore significantly improve the application efficiency of indoor 3D map data.
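Encoding octree cells "to facilitate addressing" is commonly done with interleaved-bit (Morton) keys, where each 3-bit group selects one octant per level, so a cell's address doubles as its path from the root. A sketch under that assumption (the paper's exact encoding may differ):

```python
def octree_code(x, y, z, depth):
    """Interleave the bits of a voxel's (x, y, z) grid indices into a
    single octree key.  Reading the key 3 bits at a time walks the
    tree from the root, so addressing needs no pointer chasing."""
    code = 0
    for level in range(depth):
        bit = depth - 1 - level
        code = (code << 3) | (((x >> bit) & 1) << 2) \
                           | (((y >> bit) & 1) << 1) \
                           | ((z >> bit) & 1)
    return code
```

With such keys, all cells inside one room occupy contiguous key ranges, which is what makes a room-interval filter cheap to apply.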
Classification method and updating mechanism of hierarchical 3D indoor map
FENG Guangsheng, ZHANG Xiaoxue, WANG Huiqiang, LI Bingyang, YUAN Quan, CHEN Shijun, CHEN Dawei
Journal of Computer Applications    2019, 39 (1): 78-81.   DOI: 10.11772/j.issn.1001-9081.2018071657
Since existing map updating methods perform poorly in indoor map environments, a hierarchical indoor map updating method was proposed. Firstly, the activity of indoor objects was taken as a parameter. Then, a hierarchy was built to reduce the amount of data to be updated. Finally, a Convolutional Neural Network (CNN) was used to determine the hierarchy level to which each piece of indoor data belongs. The experimental results show that, compared with the version update method, the update time of the proposed method is reduced by 27 percentage points, and it gradually falls below that of the incremental update method once the number of updated items exceeds 100. Compared with the incremental update method, the update package size of the proposed method is reduced by 6.2 percentage points, and its update package remains smaller than that of the version update method while the number of data items is below 200. The proposed method can therefore significantly improve the updating efficiency of indoor maps.
New 3D scene modeling language and environment based on BNF paradigm
XU Xiaodan, LI Bingjie, LI Bosen, LYU Shun
Journal of Computer Applications    2018, 38 (9): 2666-2672.   DOI: 10.11772/j.issn.1001-9081.2018030552
Existing Three-Dimensional (3D) scene modeling models suffer from a high degree of business coupling and insufficient ability to describe the object attributes and characteristics of complex scenes. To solve the problem of modeling 3D virtual sacrifice scenes, a new scene modeling language and environment based on BNF (Backus-Naur Form) was proposed. Firstly, the concepts of scene object, scene object template and scene object template attribute were introduced to analyze the constitution of a 3D virtual sacrifice scene in detail. Secondly, a 3D scene modeling language with loose coupling, strong attribute description capability and flexible generality was proposed. Then, operations on the scene modeling language were designed, so that the language could be edited through Application Programming Interface (API) calls and supported interface-based modeling. Finally, a set of Extensible Markup Language (XML) mapping methods was defined for the language, allowing scene modeling results to be stored as XML text and improving their reusability, and the application of the modeling was demonstrated. The application results show that the method enhances support for new data type features, improves the description of sequence and structure attribute types, and increases the description capability, versatility and flexibility for complex scenes. The proposed method outperforms the method of SHU et al. (SHU B, QIU X J, WANG Z Q. Survey of shape from image. Journal of Computer Research and Development, 2010, 47(3): 549-560) and solves the problem of 3D virtual sacrifice scene modeling. It is also suitable for modeling 3D scenes with low granularity, many attribute components and a high degree of coupling, and can improve modeling efficiency.
File recovery based on SQLite content carving
MA Qingjie, LI Binglong, WEI Lina
Journal of Computer Applications    2017, 37 (2): 392-396.   DOI: 10.11772/j.issn.1001-9081.2017.02.0392
SQLite is used by many Instant Messaging (IM) applications to store history data. During IM forensics, criminals often hide, delete or overwrite important SQLite data to impede judicial investigation, and current data recovery methods are inefficient and cannot extract overwritten data. To resolve these problems, a content carving algorithm based on SQLite was proposed. According to the storage characteristics and data deletion mechanism of the SQLite database, free domains were treated as the units forming the free list, page content was used as the fine-grained structural unit of carving, and scattered data blocks were spliced efficiently according to the positions of the overwritten data. The experimental results show that the proposed SQLite content carving algorithm can effectively recover IM data on local and mobile terminals; the recovery rate reaches 100% when the database is not damaged, and still reaches about 50% when the deleted area is overwritten to different degrees.
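Free-list carving starts from the freeblock chain that SQLite keeps inside each b-tree page: bytes 1-2 of the page header hold the offset of the first freeblock, and each freeblock begins with a 2-byte next-freeblock offset followed by a 2-byte size, all big-endian. A minimal chain walker, not the paper's full splicing algorithm:

```python
import struct

def freeblocks(page):
    """Walk the freeblock chain of one SQLite b-tree page and return
    (offset, size) pairs.  Deleted records live in these regions until
    they are overwritten, so carving inspects them first."""
    blocks = []
    off = struct.unpack_from(">H", page, 1)[0]  # first freeblock offset
    while off:
        nxt, size = struct.unpack_from(">HH", page, off)
        blocks.append((off, size))
        off = nxt                                # 0 terminates the chain
    return blocks
```

A real carver would then interpret the bytes inside each freeblock as record headers and splice fragments across pages, which is where the paper's contribution lies.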
Personalized recommendation technique for mobile life services based on location cluster
ZHENG Hui, LI Bing, CHEN Donglin, LIU Pingfeng
Journal of Computer Applications    2015, 35 (4): 1148-1153.   DOI: 10.11772/j.issn.1001-9081.2015.04.1148

Current mobile recommendation systems limit the real role of location information because they treat location as just an ordinary attribute; more importantly, the correlation among locations and the boundary of users' activities have been ignored. To address this issue, a personalized recommendation technique for mobile life services based on location clusters was proposed, which considers both user preference within a location cluster and the related weights derived from a forgetting factor and information entropy. Fuzzy clustering was used to obtain the location clusters, and a forgetting factor was then used to adjust the preferences for service resources within each location cluster. The related weights were obtained using probability distributions and information entropy, and the top-N recommendation set was obtained by matching user preferences against service resources. As a result, the algorithm can provide service resources with high similarity to user preferences, a conclusion verified by case study.
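A forgetting factor of the kind described above is typically an exponential decay on interaction recency. A minimal sketch, assuming timestamped interaction weights within one location cluster (the half-life value and the aggregation are illustrative, not the paper's exact formula):

```python
import math

def decayed_preference(events, now, half_life_days=30.0):
    """Aggregate a user's interest in one service resource inside a
    location cluster, exponentially down-weighting older interactions.
    `events` is a list of (timestamp_in_days, weight) pairs."""
    lam = math.log(2.0) / half_life_days   # decay rate from half-life
    return sum(w * math.exp(-lam * (now - t)) for t, w in events)
```

Ranking service resources by this decayed score, then weighting by entropy-derived factors, yields the top-N set the abstract describes.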

Improvement of WordNet application programming interface and its application in Mashup services discovery
ZENG Cheng, TANG Yong, ZHU Zilong, LI Bing
Journal of Computer Applications    2015, 35 (11): 3182-3186.   DOI: 10.11772/j.issn.1001-9081.2015.11.3182
The traditional WordNet Application Programming Interface (API) is based on file operations, so each API call against the WordNet library incurs serious time overhead during text analysis and similarity calculation. Therefore, an improved WordNet API was proposed: the semantic network of WordNet concepts is constructed in computer memory, and several APIs convenient for similarity calculation are added. These improvements accelerate the tracking of concept relationships and the calculation of text similarity. The solution was applied to the process of Mashup service discovery. The experimental results show that the improved API can effectively improve the query efficiency and recall of Mashup service discovery.
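The core of the improvement is loading the concept graph once and answering queries from RAM. A toy sketch of that idea, with hypothetical concept names and a simple path-based similarity (not the actual WordNet data model or the paper's added APIs):

```python
class ConceptNet:
    """Toy in-memory concept network: hypernym edges are loaded once,
    then similarity queries run entirely from RAM, avoiding the
    per-call file reads of the traditional WordNet API."""

    def __init__(self, hypernym_of):
        self.up = dict(hypernym_of)  # concept -> its hypernym (parent)

    def _ancestors(self, c):
        # Distance from c to each of its ancestors, itself included.
        dist, d = {c: 0}, 0
        while c in self.up:
            c, d = self.up[c], d + 1
            dist[c] = d
        return dist

    def path_similarity(self, a, b):
        # 1 / (length of the shortest hypernym path between a and b + 1).
        anc_a = self._ancestors(a)
        c, d = b, 0
        while True:
            if c in anc_a:
                return 1.0 / (anc_a[c] + d + 1)
            if c not in self.up:
                return 0.0
            c, d = self.up[c], d + 1
```

Because the whole graph lives in a dictionary, repeated similarity calls during Mashup text analysis touch no files at all.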
Digit recognition based on distance distribution histogram
WU Shao-hong, WANG Yun-kuan, SUN Tao, LI Bing
Journal of Computer Applications    2012, 32 (08): 2299-2304.   DOI: 10.3724/SP.J.1087.2012.02299
Due to the variability of unconstrained or handwritten digits, most algorithms in previous studies sacrifice either easy implementation for high accuracy, or vice versa. This paper proposed a new feature descriptor named Distance Distribution Histogram (DDH) and an adapted Shape Accumulate Histogram (SAH) descriptor based on shape context, which are not only easy to implement but also robust to noise and distortion. To make the hybrid features more comprehensive, other adapted topological features were combined. The aggregated features are complementary, as they come from different original feature sets extracted by different means, and they are not complicated. Three Support Vector Machine (SVM) classifiers with different feature vectors were used, and their results were integrated to obtain the final classification. The average accuracy over several experiments on self-established datasets, MNIST and USPS reaches 99.21%, which demonstrates that the proposed algorithm is robust and effective.
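A distance-distribution style descriptor can be sketched as a normalized histogram of pairwise distances between foreground pixels; the bin count and normalization below are illustrative assumptions, not the paper's exact DDH definition:

```python
import math

def distance_histogram(points, n_bins=8):
    """Histogram of pairwise distances between foreground pixels,
    normalized by the maximum distance (scale invariance) and by the
    pair count (so histograms of different digits are comparable).
    Translation invariance is automatic: only distances are used."""
    dists = [math.dist(p, q)
             for i, p in enumerate(points) for q in points[i + 1:]]
    d_max = max(dists)
    hist = [0] * n_bins
    for d in dists:
        hist[min(int(n_bins * d / d_max), n_bins - 1)] += 1
    total = len(dists)
    return [h / total for h in hist]
```

Such a histogram would then be concatenated with the SAH and topological features to form one of the SVM input vectors.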
Image-pyramid-oriented linear quadtree coding and features
Jian-xun LI, Bing SHEN, Jian-hua GUO
Journal of Computer Applications    2011, 31 (04): 1148-1151.   DOI: 10.3724/SP.J.1087.2011.01148
Based on the linear quadtree, an image-element coding method oriented to the image pyramid was introduced. According to the coding rules and the Bounding Box (BBOX) recurrence formula, the corresponding features, namely the location feature, existence feature and neighborhood feature, were analyzed. Furthermore, a global multi-resolution virtual terrain environment and a zoom-in operation algorithm were constructed to test the coding method. The results show that the method can rapidly distinguish boundary image-elements and neighborhood image-elements, and runs faster than other similar algorithms.
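The level-to-level part of a linear quadtree coding can be sketched in two functions: moving between a cell and its parent or children is pure bit arithmetic on the code. This illustrates only the pyramid recurrence, not the paper's BBOX formula or neighborhood rules:

```python
def quad_parent(code):
    """Parent of a linear-quadtree cell: drop the trailing 2-bit
    quadrant digit, moving one pyramid level up."""
    return code >> 2

def quad_children(code):
    """The four children of a cell at the next (finer) pyramid level:
    append each quadrant digit 0..3 to the code."""
    return [(code << 2) | q for q in range(4)]
```

Because codes are plain integers, zoom-in simply expands each visible cell into its four children without any tree traversal.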
Blind watermarking algorithm for digital color images withstanding cutting attacks
YIN De-hui,LI Bing-fa
Journal of Computer Applications    2005, 25 (04): 853-855.   DOI: 10.3724/SP.J.1087.2005.0853

A meaningful binary image was used as the watermark. Through the Arnold transform, the watermark image was scrambled to eliminate the spatial correlation among pixels, which enhanced the algorithm's ability to withstand attacks such as cutting. The security of the watermark was further enhanced by improving the Arnold transform. With a quantization method, the original image is not required for extracting the watermark. Numerical experiments show that the proposed algorithm is strong in withstanding cutting attacks: even if more than half of the image is cut off, the extracted watermark still has good visual quality. After JPEG compression, noise addition, filtering, sharpening, blurring, contamination, distortion and other image processing operations or attacks, relatively clear watermark images can still be extracted.
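The standard (unimproved) Arnold cat map on an N x N image sends pixel (x, y) to ((x + y) mod N, (x + 2y) mod N); because the map is a bijection with a finite period, descrambling is just applying it more times. A minimal sketch of this scrambling step, not of the paper's improved variant:

```python
def arnold(image, iterations=1):
    """Arnold cat-map scrambling of a square image given as a list of
    rows: (x, y) -> ((x + y) mod N, (x + 2y) mod N).  The map has
    determinant 1, so it permutes pixels without loss, and it is
    periodic, so enough further iterations restore the original."""
    n = len(image)
    for _ in range(iterations):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[(x + y) % n][(x + 2 * y) % n] = image[x][y]
        image = out
    return image
```

Scattering the watermark pixels this way is what lets a cut region destroy only dispersed, recoverable fragments rather than a contiguous block of the watermark.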

Effective descriptive model of E-commerce merchandise and the multi-website searching mechanism
PENG Dai-yi ,YIN De-hui, LI Bing-fa
Journal of Computer Applications    2005, 25 (02): 472-474.   DOI: 10.3724/SP.J.1087.2005.0472
Although E-commerce was being popularized and applied all over the world, there were still unsolved problems in current E-commerce modes. Among them, the lack of a uniform description for E-merchandise was a critical limit on the development of E-commerce; in addition, there was no effective mechanism for describing website relationships that would enable effective multi-website search. To solve the above problems, this paper proposed a uniform description and a relationship model for E-merchandise based on XML. A distributed intelligent agent framework was then introduced to realize a multi-website searching mechanism for E-commerce based on the relationship model.
E-cheque payment system based on digital watermarking and digital signature
DAI Hua,ZHANG Lin-cong,LI Bing-fa
Journal of Computer Applications    2005, 25 (02): 403-406.   DOI: 10.3724/SP.J.1087.2005.0403

The security problems of E-cheques used in electronic commerce payment were studied. Based on an analysis of their development and security gaps, a security guarantee system combining digital watermarking and digital signature was proposed. Double entity authentication and watermark content authentication make it impossible to gain illegal access to an E-cheque, or to edit or forge it. According to the analysis, the security, creditability and authenticity of E-cheques can be achieved by the system.
